Imbalanced data pose challenges for deep learning based classification models. One of the most widely used approaches to handle imbalanced data is re-weighting, where training samples are associated with different weights in the loss function. Most existing re-weighting methods treat the example weights as learnable parameters and optimize the weights on a meta set, thus requiring expensive bilevel optimization. In this paper, we propose a novel re-weighting method based on optimal transport (OT) from a distributional point of view. Specifically, we view the training set as an imbalanced distribution over its samples, which is transported by OT to a balanced distribution obtained from the meta set. The weights of the training samples are the probability mass of the imbalanced distribution and are learned by minimizing the OT distance between the two distributions. Compared with existing methods, our proposed approach frees the weight learning at each iteration from its dependence on the concerned classifier. Experiments on image, text, and point cloud datasets demonstrate that our proposed re-weighting method achieves excellent performance, obtaining state-of-the-art results in many cases, and providing a promising tool for addressing imbalanced classification.
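The weight-learning step can be sketched compactly. Below is a minimal, illustrative PyTorch version, not the paper's implementation: sample weights live on the probability simplex via a softmax and are learned by differentiating through an entropic-OT (Sinkhorn) distance to a uniform distribution over meta samples; the feature tensors and all hyperparameters are placeholder assumptions.

```python
import torch

def sinkhorn_ot(w, cost, b, eps=0.1, iters=50):
    """Entropic OT distance between source marginal w and target marginal b."""
    K = torch.exp(-cost / eps)                    # Gibbs kernel
    u = torch.ones_like(w)
    for _ in range(iters):
        v = b / (K.t() @ u)
        u = w / (K @ v)
    plan = u[:, None] * K * v[None, :]            # transport plan
    return (plan * cost).sum()

# Toy setup: feature placeholders for training and meta samples (assumptions).
n_train, n_meta, d = 64, 16, 8
f_train, f_meta = torch.randn(n_train, d), torch.randn(n_meta, d)
cost = torch.cdist(f_train, f_meta) ** 2
cost = cost / cost.max()                          # normalize for stability
b = torch.full((n_meta,), 1.0 / n_meta)           # balanced meta distribution

theta = torch.zeros(n_train, requires_grad=True)  # logits of sample weights
opt = torch.optim.Adam([theta], lr=0.1)
for _ in range(200):
    w = torch.softmax(theta, dim=0)               # weights on the simplex
    loss = sinkhorn_ot(w, cost, b)
    opt.zero_grad()
    loss.backward()
    opt.step()

weights = torch.softmax(theta, dim=0).detach()    # learned re-weighting
```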
Alpa automates model-parallel training of large deep learning (DL) models by generating execution plans that unify data, operator, and pipeline parallelism. Existing model-parallel training systems either require users to manually create a parallelization plan or automatically generate one from a limited space of model parallelism configurations. They do not suffice to scale out complex DL models on distributed compute devices. Alpa distributes the training of large DL models by viewing parallelisms as two hierarchical levels: inter-operator and intra-operator parallelism. Based on this, Alpa constructs a new hierarchical space of massive model-parallel execution plans. Alpa designs a number of compilation passes to automatically derive efficient parallel execution plans at each parallelism level, and implements an efficient runtime to orchestrate the two-level parallel execution on distributed compute devices. Our evaluation shows that Alpa generates parallelization plans that match or outperform hand-tuned model-parallel training systems even on the models they are designed for. Unlike specialized systems, Alpa also generalizes to models with heterogeneous architectures and models without manually designed plans. Alpa's source code is publicly available at https://github.com/alpa-projects/alpa
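For flavor, here is a minimal usage sketch based on Alpa's documented decorator API; the toy model, shapes, and step size are placeholders, and a running Ray cluster is assumed.

```python
# A sketch, not a definitive example: Alpa parallelizes a JAX train step.
import alpa
import jax
import jax.numpy as jnp

alpa.init(cluster="ray")  # connect to a Ray cluster of distributed devices

@alpa.parallelize  # Alpa searches inter-/intra-operator parallelization plans
def train_step(params, batch):
    def loss_fn(p):
        pred = batch["x"] @ p["w"]              # stand-in single-layer model
        return jnp.mean((pred - batch["y"]) ** 2)
    grads = jax.grad(loss_fn)(params)
    return jax.tree_util.tree_map(lambda p, g: p - 1e-2 * g, params, grads)
```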
Ophthalmologists have used fundus images to screen and diagnose eye diseases. However, different devices and ophthalmologists introduce large variations in the quality of fundus images. Low-quality (LQ) degraded fundus images easily lead to uncertainty in clinical screening and generally increase the risk of misdiagnosis. Thus, real fundus image restoration is worth studying. Unfortunately, no real clinical benchmark has been explored for this task so far. In this paper, we investigate the real clinical fundus image restoration problem. Firstly, we establish a clinical dataset, Real Fundus (RF), consisting of 120 low- and high-quality (HQ) image pairs. Then we propose a novel Transformer-based Generative Adversarial Network (RFormer) to restore the real degradation of clinical fundus images. The key component in our network is the Window-based Self-Attention Block (WSAB), which captures non-local self-similarity and long-range dependencies. To produce more visually pleasing results, a Transformer-based discriminator is introduced. Extensive experiments on our clinical benchmark show that the proposed RFormer significantly outperforms the state-of-the-art (SOTA) methods. In addition, experiments on downstream tasks such as vessel segmentation and optic disc/cup detection demonstrate that the proposed RFormer benefits clinical fundus image analysis and applications. The dataset, code, and models will be released.
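To illustrate the window-based attention idea (not RFormer's exact WSAB), the sketch below restricts multi-head self-attention to non-overlapping windows of a feature map; the dimensions, head count, and window size are placeholders.

```python
import torch
import torch.nn as nn

class WindowSelfAttention(nn.Module):
    """Self-attention within non-overlapping windows (illustrative only)."""
    def __init__(self, dim, window=8, heads=4):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                          # x: (B, C, H, W)
        B, C, H, W = x.shape
        w = self.window                            # assumes H, W divisible by w
        # partition into (B * num_windows, w*w, C) token sequences
        x = x.view(B, C, H // w, w, W // w, w)
        x = x.permute(0, 2, 4, 3, 5, 1).reshape(-1, w * w, C)
        out, _ = self.attn(x, x, x)                # attention within each window
        # merge windows back into the (B, C, H, W) feature map
        out = out.view(B, H // w, W // w, w, w, C)
        return out.permute(0, 5, 1, 3, 2, 4).reshape(B, C, H, W)
```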
Many deep learning tasks have to deal with graphs (e.g., protein structures, social networks, source code abstract syntax trees). Due to the importance of these tasks, people have turned to Graph Neural Networks (GNNs) as the de facto method for learning on graphs, and GNNs have been widely applied due to their convincing performance. Unfortunately, one major barrier to using GNNs is that they require substantial time and resources to train. Recently, a new method for learning on graph data is the Graph Neural Tangent Kernel (GNTK) [Du, Hou, Salakhutdinov, Poczos, Wang and Xu 19]. GNTK is an application of the Neural Tangent Kernel (NTK) [Jacot, Gabriel and Hongler 18] (a kernel method) to graph data, and solving NTK regression is equivalent to using gradient descent to train an infinitely wide neural network. The key benefit of using GNTK is that, as with any kernel method, its parameters can be solved directly in a single step, avoiding time-consuming gradient descent. Meanwhile, sketching has become increasingly used to speed up various optimization problems, including solving kernel regression. Given a kernel matrix of $N$ graphs, using sketching when solving kernel regression can reduce the running time below $O(N^3)$. But unfortunately, such methods usually require extensive knowledge of the kernel matrix beforehand, while in the case of GNTK we find that constructing the kernel matrix already takes $O(N^2 n^4)$ time, assuming each graph has $n$ nodes. The kernel matrix construction time can thus be the major performance bottleneck as the graph size $n$ increases. The natural question to ask is therefore whether we can speed up the kernel matrix construction to improve GNTK regression's end-to-end running time. This paper provides the first algorithm to construct the kernel matrix in $O(N^2 n^3)$ running time.
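Once the GNTK matrix is built, the regression itself is the one-step training the abstract refers to: a single regularized linear solve. A minimal sketch of kernel ridge regression follows; the ridge term and dense NumPy solve are generic choices, not tied to the paper.

```python
import numpy as np

def kernel_regression_fit(K, y, lam=1e-3):
    """One-step 'training': solve (K + lam*I) alpha = y, an O(N^3) dense solve."""
    return np.linalg.solve(K + lam * np.eye(K.shape[0]), y)

def kernel_regression_predict(K_test, alpha):
    """K_test: kernel values between test graphs and the N training graphs."""
    return K_test @ alpha
```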
Continuously learning new tasks using high-level ideas or knowledge is a key capability of humans. In this paper, we propose Lifelong reinforcement learning with Sequential linear temporal logic formulas and Reward Machines (LSRM), which enables an agent to leverage previously learned knowledge to accelerate the learning of logically specified tasks. For more flexible task specification, we first introduce Sequential Linear Temporal Logic (SLTL), a supplement to the existing Linear Temporal Logic (LTL) formal language. We then utilize Reward Machines (RM) to exploit the structured reward functions of tasks encoded with high-level events, and propose automatic extension of RMs and efficient knowledge transfer over RMs for continuous learning of tasks over a lifetime. Experimental results show that LSRM outperforms methods that learn the target tasks from scratch, thanks to its task decomposition using SLTL and its knowledge transfer over RMs during the lifelong learning process.
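To make the reward-machine idea concrete, here is a minimal, illustrative automaton that emits rewards on high-level events; the states, events, and the sequencing task are invented for illustration and are not from the paper.

```python
# A reward machine: a finite automaton over high-level events that
# emits a reward on each transition and flags terminal states.
class RewardMachine:
    def __init__(self, transitions, initial, terminal):
        self.delta = transitions      # (state, event) -> (next_state, reward)
        self.state = initial
        self.terminal = terminal

    def step(self, event):
        # unknown events leave the state unchanged and yield zero reward
        self.state, reward = self.delta.get((self.state, event),
                                            (self.state, 0.0))
        return reward, self.state in self.terminal

# Hypothetical task "eventually reach a, then eventually reach b"
# (SLTL-style sequencing of two subgoals).
rm = RewardMachine(
    transitions={("u0", "a"): ("u1", 0.0), ("u1", "b"): ("u2", 1.0)},
    initial="u0", terminal={"u2"},
)
```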
Deep learning approaches have shown promising results in remote sensing high spatial resolution (HSR) land-cover mapping. However, urban and rural scenes can present completely different geographical landscapes, and the inadequate generalizability of these algorithms hinders city-level or national-level mapping. Most existing HSR land-cover datasets mainly promote research on learning semantic representations, thereby ignoring model transferability. In this paper, we introduce the Land-cOVEr Domain Adaptive semantic segmentation (LoveDA) dataset to advance semantic and transferable learning. The LoveDA dataset contains 5987 HSR images with 166768 annotated objects from three different cities. Compared to existing datasets, the LoveDA dataset encompasses two domains (urban and rural), which brings considerable challenges due to: 1) multi-scale objects; 2) complex background samples; and 3) inconsistent class distributions. The LoveDA dataset is suitable for both land-cover semantic segmentation and unsupervised domain adaptation (UDA) tasks. Accordingly, we benchmarked the LoveDA dataset on eleven semantic segmentation methods and eight UDA methods. Some exploratory studies, including multi-scale architectures and strategies, additional background supervision, and pseudo-label analysis, were also conducted to address these challenges. The code and data are available at https://github.com/junjue-wang/loveda.
For high spatial resolution (HSR) remote sensing images, bitemporal supervised learning has long dominated change detection, using many pairwise labeled bitemporal images. However, pairwise labeling of large-scale bitemporal HSR remote sensing images is very expensive and time-consuming. In this paper, we propose single-temporal supervised learning (STAR) for change detection from a new perspective of exploiting object changes in unpaired images as supervisory signals. STAR enables us to train a high-accuracy change detector using only \textbf{unpaired} labeled images and to generalize to real-world bitemporal images. To evaluate the effectiveness of STAR, we design a simple yet effective change detector called ChangeStar, which can reuse any deep semantic segmentation architecture via the ChangeMixin module. Comprehensive experimental results show that ChangeStar outperforms the baseline by a large margin under single-temporal supervision and achieves superior performance under bitemporal supervision. Code is available at https://github.com/z-zheng/changestar
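The following sketch illustrates one reading of the single-temporal supervision idea: pseudo bitemporal pairs are formed from unrelated labeled images, and the pseudo change label is the logical XOR of their object masks. Tensor shapes and the random pairing scheme are assumptions, not STAR's exact procedure.

```python
import torch

def pseudo_bitemporal_batch(images, masks):
    """Pair each image with a random *other* image from the same batch.

    images: (B, C, H, W) single-temporal images
    masks:  (B, H, W) binary object masks
    """
    perm = torch.randperm(images.size(0))
    img_t1, img_t2 = images, images[perm]
    # an object present at one "time" but not the other counts as change
    pseudo_change = (masks != masks[perm]).long()
    return img_t1, img_t2, pseudo_change
```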
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot benefit, or benefit only marginally, from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, inputs, network regularization, sequential distillation, etc., revealing that: 1) Distilling token relations is more effective than CLS-token- and feature-based distillation; 2) Using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) Weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over scratch MIM pre-training on ImageNet-1K classification with the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way for developing small vision Transformer models, that is, by exploring better training methods rather than introducing inductive biases into architectures as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
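As one concrete interpretation of the first finding, the sketch below distills token-to-token relations by matching student and teacher similarity maps with a softened KL term; the temperature, cosine normalization, and layer choice are assumptions rather than TinyMIM's exact recipe.

```python
import torch
import torch.nn.functional as F

def token_relation_loss(feat_s, feat_t, tau=1.0):
    """Match student/teacher token-to-token similarity maps.

    feat_s, feat_t: (B, N, D) token features from one transformer layer.
    """
    rel_s = torch.einsum("bnd,bmd->bnm", F.normalize(feat_s, dim=-1),
                                         F.normalize(feat_s, dim=-1))
    rel_t = torch.einsum("bnd,bmd->bnm", F.normalize(feat_t, dim=-1),
                                         F.normalize(feat_t, dim=-1))
    # KL between softened relation distributions over the N target tokens
    log_p = F.log_softmax(rel_s / tau, dim=-1)
    q = F.softmax(rel_t / tau, dim=-1)
    return F.kl_div(log_p, q, reduction="batchmean")
```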
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
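A minimal sketch of NAIVEATTACK-style trigger injection, stamping a patch onto raw images before distillation begins; the patch shape, poison rate, and target class are placeholder assumptions, not the paper's exact settings.

```python
import torch

def add_trigger(images, labels, target_class=0, rate=0.1, patch=3):
    """Poison a fraction of raw images prior to dataset distillation.

    images: (B, C, H, W) float tensor in [0, 1]; labels: (B,) int tensor.
    """
    n = int(rate * images.size(0))
    idx = torch.randperm(images.size(0))[:n]
    images[idx, :, -patch:, -patch:] = 1.0   # white square, bottom-right corner
    labels[idx] = target_class               # relabel to the attacker's class
    return images, labels
```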
Benefiting from the intrinsic supervision information exploitation capability, contrastive learning has achieved promising performance in the field of deep graph clustering recently. However, we observe that two drawbacks of the positive and negative sample construction mechanisms limit the performance of existing algorithms from further improvement. 1) The quality of positive samples heavily depends on the carefully designed data augmentations, while inappropriate data augmentations would easily lead to semantic drift and indiscriminative positive samples. 2) The constructed negative samples are not reliable for ignoring important clustering information. To solve these problems, we propose a Cluster-guided Contrastive deep Graph Clustering network (CCGC) by mining the intrinsic supervision information in the high-confidence clustering results. Specifically, instead of conducting complex node or edge perturbation, we construct two views of the graph by designing special Siamese encoders whose weights are not shared between the sibling sub-networks. Then, guided by the high-confidence clustering information, we carefully select and construct the positive samples from the same high-confidence cluster in the two views. Moreover, to construct semantically meaningful negative sample pairs, we regard the centers of different high-confidence clusters as negative samples, thus improving the discriminative capability and reliability of the constructed sample pairs. Lastly, we design an objective function to pull close the samples from the same cluster while pushing away those from other clusters, by maximizing and minimizing the cross-view cosine similarity between positive and negative samples. Extensive experimental results on six datasets demonstrate the effectiveness of CCGC compared with the existing state-of-the-art algorithms.
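A minimal sketch of the pull/push objective under this description: maximize cross-view cosine similarity of high-confidence positives while minimizing similarity to other clusters' centers. The unweighted combination of the two terms is an assumption.

```python
import torch
import torch.nn.functional as F

def pull_push_loss(z1, z2, centers):
    """Cross-view contrastive objective over high-confidence samples.

    z1, z2:  (P, D) embeddings of the same high-confidence samples in two views
    centers: (K, D) centers of the *other* high-confidence clusters (negatives)
    """
    pos = F.cosine_similarity(z1, z2, dim=-1).mean()           # pull together
    neg = F.cosine_similarity(z1.unsqueeze(1),                 # (P, K) pairs
                              centers.unsqueeze(0), dim=-1).mean()
    return neg - pos   # minimizing this pushes negatives apart, pulls positives
```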